Investigating the Practical Consequences of Model Misfit in Unidimensional IRT Models

Authors

  • Daniela R. Crişan
  • Jorge N. Tendeiro
  • Rob R. Meijer
Abstract

In this article, the practical consequences of violations of unidimensionality on selection decisions within the framework of unidimensional item response theory (IRT) models are investigated using simulated data. The manipulated factors include the severity of the violations, the proportion of misfitting items, and test length. The outcomes considered are the precision and accuracy of the estimated model parameters; the correlations of estimated ability (θ̂) and number-correct (X) scores with the true ability (θ); the ranks of the examinees and the overlap between sets of examinees selected based on θ̂, X, or θ scores; and the bias in criterion-related validity estimates. Results show that the θ̂ values were not biased by violations of unidimensionality, but their precision decreased as multidimensionality and the proportion of misfitting items increased; the estimated item parameters were robust to violations of unidimensionality. The correlations between θ̂, X, and θ scores, the agreement between the three selection criteria, and the accuracy of criterion-related validity estimates were all negatively affected, to some extent, by increasing levels of multidimensionality and increasing proportions of misfitting items. However, removing the misfitting items improved the results only in the case of severe multidimensionality and a large proportion of misfitting items, and deteriorated them otherwise.
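As a minimal illustration of the unidimensional setting studied above, the sketch below simulates item responses under a two-parameter logistic (2PL) IRT model and computes the correlation between number-correct (X) scores and true ability (θ). The sample size, test length, and item-parameter ranges are illustrative assumptions, not the article's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_items = 1000, 40

# True abilities (theta) and 2PL item parameters (assumed ranges, for illustration)
theta = rng.normal(0.0, 1.0, n_examinees)
a = rng.uniform(0.8, 2.0, n_items)   # discrimination parameters
b = rng.normal(0.0, 1.0, n_items)    # difficulty parameters

# 2PL response probabilities: P(correct) = 1 / (1 + exp(-a * (theta - b)))
p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
responses = (rng.uniform(size=p.shape) < p).astype(int)

# Number-correct (X) scores and their correlation with true ability (theta)
x_scores = responses.sum(axis=1)
r = np.corrcoef(theta, x_scores)[0, 1]
print(f"corr(theta, X) = {r:.2f}")
```

With a well-fitting unidimensional model and a test of moderate length, this correlation is typically high; the article's manipulations (multidimensionality, misfitting items) are precisely what erode it.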


Related Articles

Practical Consequences of Item Response Theory Model Misfit in the Context of Test Equating with Mixed-Format Test Data

In item response theory (IRT) models, assessing model-data fit is an essential step in IRT calibration. While no general agreement has ever been reached on the best methods or approaches to use for detecting misfit, perhaps the more important comment based upon the research findings is that rarely does the research evaluate IRT misfit by focusing on the practical consequences of misfit. The stu...


Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-worl...


Diagnosing item score patterns on a test using item response theory-based person-fit statistics.

Person-fit statistics have been proposed to investigate the fit of an item score pattern to an item response theory (IRT) model. The author investigated how these statistics can be used to detect different types of misfit. Intelligence test data were analyzed using person-fit statistics in the context of the G. Rasch (1960) model and R. J. Mokken's (1971, 1997) IRT models. The effect of the cho...


Immediate list recall as a measure of short-term episodic memory: insights from the serial position effect and item response theory.

The serial position effect shows that two interrelated cognitive processes underlie immediate recall of a supraspan word list. The current study used item response theory (IRT) methods to determine whether the serial position effect poses a threat to the construct validity of immediate list recall as a measure of verbal episodic memory. Archival data were obtained from a national sample of 4,21...


Multidimensional Computerized Adaptive Testing Based on Bayesian Theory

Effective and efficient assessment of a learner’s proficiency has always been a high priority for intelligent e-Learning environments. The fields of psychometrics and Computer Adaptive Testing (CAT) provide a strong theoretical and practical basis for performing skills assessment, of which Item Response Theory (IRT) is the best recognized approach. For assessing multiple skills at once, which i...



Journal:

Volume 41, Issue 

Pages  -

Publication year: 2017